27.4.2. Inductive Logic Programming: Top-Down Inductive Learning: FOIL

So, say we have the right background knowledge for the moment.

How do we do this learning thing?

Well, one way is by something we call top-down inductive learning.

And that's kind of the opposite way to decision tree learning.

In decision tree learning, basically we start from the observations

and work backwards from those, so we grow the decision tree until it covers all of the examples.

And what we're going to do here is similar in spirit: we start with a very simple hypothesis

and specialize that, going from the top down.

So how does that work for the grandparents?

So you have positive and negative examples, just like we talked about, and then we construct a set of Horn clauses

as our representation of the hypothesis.

And we'll start out with a very simple one: Grandfather(x, y), with an empty body.

Or in other words, everybody is the grandfather of everybody else.

You can imagine this is not the last hypothesis we'll have.

And then we'll just test that and see: how does that work?

Well, obviously it doesn't.

Why?

Well, in one sense it's actually very good:

it classifies all the positive examples correctly — but it also classifies all of the negative ones incorrectly.
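To see this concretely, here is a tiny sketch — the people and examples are made up for illustration, not taken from the lecture. The empty-body clause covers every pair of people:

```python
# Hypothetical examples for the target predicate Grandfather(x, y).
people = {"Abe", "Homer", "Bart", "Lisa"}
positives = {("Abe", "Bart"), ("Abe", "Lisa")}
negatives = {(x, y) for x in people for y in people} - positives

def hypothesis(x, y):
    # The initial clause has an empty body: Grandfather(x, y) <= True.
    return True

covered_pos = sum(hypothesis(x, y) for x, y in positives)
covered_neg = sum(hypothesis(x, y) for x, y in negatives)
# Every positive example is covered -- but so is every negative one,
# which is why the clause has to be specialized.
```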

So by that we know we have to do something.

We have to specialize it.

And there's a couple of things we can do.

We can say if X is a father of Y, then X is a grandfather of Y.

That would be one thing to do.

And if X is a parent of Z, then X is a grandfather of Y.

That's kind of the right spirit because it introduces the Z.

And so on.

There's lots of stuff you can do.

You can say, well, if U is married to V, then X is a grandfather of Y.

There are no limitations.

And we can see that if we, for some reason, prefer a clause saying if X is a father of Z, then X is a grandfather of Y —

it's totally unclear for the moment why we would do that —

and then specialize it further, we can get something like: if X is a father of Z

and Z is a parent of Y, then we have X being a grandfather of Y.
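A minimal sketch of that specialized clause — Father(x, z) ∧ Parent(z, y) ⇒ Grandfather(x, y) — over made-up family facts (the names are hypothetical, not from the lecture):

```python
# Made-up facts: (parent, child) pairs; in this toy family every parent is a father.
father = {("Abe", "Homer"), ("Homer", "Bart"), ("Homer", "Lisa")}
parent = set(father)
people = {p for pair in parent for p in pair}

def grandfather(x, y):
    # The body existentially quantifies over the freshly introduced variable z:
    # Father(x, z) and Parent(z, y).
    return any((x, z) in father and (z, y) in parent for z in people)

positives = {("Abe", "Bart"), ("Abe", "Lisa")}
negatives = {(x, y) for x in people for y in people} - positives
```

Unlike the empty-body clause, this one covers all the positive examples and none of the negative ones.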

So that's the spirit of doing things.

So how do we do that?

Well, that's the FOIL algorithm, which takes a couple of examples and a target predicate.

FOIL is essentially something where you get a couple of examples and say,

I want to generate a Prolog predicate for grandfather.

Programming from examples, that was the idea.
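The shape of that loop can be sketched as follows. This is a greatly simplified stand-in, not the real FOIL: actual FOIL generates candidate literals from the predicates in the knowledge base (possibly introducing new variables) and scores them with an information-gain heuristic, and it assumes some literal can always exclude the remaining negatives. Here the candidate literals are simply given as Python predicates over a whole example:

```python
def covers(body, example):
    """A clause body (a list of literals) covers an example if every literal holds."""
    return all(lit(example) for lit in body)

def foil(positives, negatives, literals):
    """Greedily learn clause bodies until all positive examples are covered."""
    clauses, uncovered = [], set(positives)
    while uncovered:
        body, remaining_neg = [], set(negatives)
        while remaining_neg:
            # Greedy choice: the literal that keeps the most positives while
            # excluding the most negatives (a crude stand-in for FOIL-Gain).
            best = max(literals,
                       key=lambda l: sum(covers(body + [l], e) for e in uncovered)
                                     - sum(covers(body + [l], e) for e in remaining_neg))
            body.append(best)  # specialize the clause
            remaining_neg = {e for e in remaining_neg if covers(body, e)}
        clauses.append(body)
        uncovered = {e for e in uncovered if not covers(body, e)}
    return clauses

# Toy usage: learn "even and less than 5" from labeled integers.
learned = foil({2, 4}, {1, 3, 5, 6},
               [lambda e: e % 2 == 0, lambda e: e < 5])
```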

Remember, I talked about Prolog and the big hopes that the AI community had for Prolog, saying,

well, we don't have to program anymore.

Specifications are enough.

That didn't quite work out, but the theory was nice.

So once people had said, well, we don't actually need to program,

We only need specifications.

Then they said, well, why not be ambitious and say, well, it should be enough to have examples.

We can derive the specification from that.

The specification is the program.

So we could turn over programming to the cleaning ladies.

Part of a chapter:
Chapter 27. Knowledge in Learning

Access: Open access

Duration: 00:18:48 min

Recording date: 2021-03-30

Uploaded: 2021-03-31 08:07:46

Language: en-US

Explanation of Top-Down Inductive Learning with the FOIL algorithm.
